To address the weak robustness of high-capacity steganography models based on encoder-decoder networks, which cannot resist noise attacks or channel compression, a high-capacity robust image steganography scheme based on an encoder-decoder network was proposed. In the proposed scheme, an encoder, a decoder and a discriminator based on Densely Connected Convolutional Network (DenseNet) were designed. The secret information and the carrier image were jointly encoded into a steganographic image by the encoder, the secret information was extracted by the decoder, and the discriminator was used to distinguish carrier images from steganographic images. A noise layer was added between the encoder and the decoder; dropout, JPEG compression, Gaussian blur, Gaussian noise and salt-and-pepper noise were used to simulate a real environment with various kinds of noise attacks. The steganographic image output by the encoder was processed by different kinds of noise and then decoded by the decoder. Through training, the decoder learned to extract the secret information from noise-corrupted steganographic images, so that the noise attacks could be resisted. Experimental results show that the steganographic capacity of the proposed scheme reaches 0.45-0.95 bpp on 360×360 pixel images, and the relative embedding capacity is improved by 2.04 times compared to the suboptimal robust steganographic scheme; the decoding accuracy reaches 0.72-0.97, and compared with steganography without the noise layer, the average decoding accuracy is improved by 44 percentage points. The proposed scheme not only guarantees high embedding capacity and high steganographic image quality, but also has stronger anti-noise capability.
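The noise layer placed between the encoder and decoder can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: parameter values (noise strength, flip and dropout rates) are assumptions, and the JPEG-compression and Gaussian-blur attacks from the paper's noise pool are omitted for brevity.

```python
import numpy as np

def noise_layer(stego, rng, kind=None):
    """Apply one simulated channel attack to a stego image in [0, 1].

    `kind=None` picks an attack at random, as the noise layer would
    during training. All parameter values are illustrative assumptions.
    """
    kinds = ["identity", "gaussian_noise", "salt_pepper", "dropout"]
    kind = kind or rng.choice(kinds)
    out = stego.copy()
    if kind == "gaussian_noise":
        out = out + rng.normal(0.0, 0.05, size=out.shape)
    elif kind == "salt_pepper":
        mask = rng.random(out.shape)
        out[mask < 0.02] = 0.0   # pepper: force pixels to black
        out[mask > 0.98] = 1.0   # salt: force pixels to white
    elif kind == "dropout":
        out = out * (rng.random(out.shape) > 0.3)  # randomly zero pixels
    return np.clip(out, 0.0, 1.0)
```

Because the attack is applied inside the training loop, the decoder's gradient is computed through (or around) this corruption, which is what drives the learned robustness.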
As the implementation vehicle of automatic machine learning, algorithm platforms have attracted wide attention in recent years. However, the business processes of these platforms must be built manually, and the platforms suffer from inflexible model invocation and the inability to automatically construct customized algorithms for specific business requirements. To address these problems, an algorithm path self-assembling model for business requirements was proposed. Firstly, the sequence features and structural features of code were modeled simultaneously based on Graph Convolutional Network (GCN) and word2vec representations. Secondly, functions in the algorithm set were further discovered through a clustering model, and the obtained function subsets were used to prepare for the path discovery of algorithm components between subsets. Finally, based on a relationship discovery model and a ranking model trained with prior knowledge, the self-assembled paths of candidate code components were mined, thereby realizing algorithm code self-assembly. Under the proposed evaluation indicators, the best result of the proposed algorithm path self-assembling model is 0.8, while that of the baseline model Okapi BM25+word2vec is 0.21. To a certain extent, the proposed model solves the problem of missing code structure and semantic information in traditional code representation methods, and lays a foundation for research on refined algorithm process self-assembly and automatic construction of algorithm pipelines.
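The clustering step that groups function embeddings into subsets could, under simple assumptions, look like a k-means pass over averaged word2vec-style vectors. The routine below is a self-contained illustrative sketch, not the paper's actual clustering model:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means for grouping code-component embeddings.

    X: (n, d) array of component vectors, e.g. averaged word2vec token
    vectors for each function; returns one cluster label per component.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each vector to its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned vectors
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

The resulting subsets would then feed the path-discovery stage, which searches for assembly paths between components of different clusters.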
Correctly identifying the entities in judgment documents is an important foundation for building legal knowledge graphs and realizing smart courts. However, commonly used Named Entity Recognition (NER) models cannot handle polysemous word representation and entity boundary recognition errors in judgment documents well. To effectively improve the recognition performance for various entities in judgment documents, a Bidirectional Long Short-Term Memory with sequential Conditional Random Field (BiLSTM-CRF) model based on Joint Learning and BERT (Bidirectional Encoder Representations from Transformers), named JLB-BiLSTM-CRF, was proposed. Firstly, the input character sequence was encoded by BERT to enhance the representation ability of word vectors. Then, the long text information was modeled by the BiLSTM network, and the NER task and the Chinese Word Segmentation (CWS) task were jointly trained to improve the boundary recognition rate of entities. Experimental results show that the model achieves a precision of 94.36%, a recall of 94.94%, and an F1 score of 94.65% on the test set, which are 1.05, 0.48 and 0.77 percentage points higher than those of the BERT-BiLSTM-CRF model respectively, verifying the effectiveness of the JLB-BiLSTM-CRF model in NER tasks for judgment documents.
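Joint training of the NER and CWS tasks is typically realized as a weighted sum of the two task losses over a shared encoder. The sketch below shows one plausible form of such a joint loss in NumPy; the balancing weight `lam` is an assumed hyperparameter, not a value from the paper:

```python
import numpy as np

def cross_entropy(logits, labels):
    """Token-level softmax cross-entropy over tag logits, averaged."""
    z = logits - logits.max(axis=-1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def joint_loss(ner_logits, ner_labels, cws_logits, cws_labels, lam=0.7):
    """Weighted sum of the NER and CWS task losses.

    `lam` trades off the main NER objective against the auxiliary CWS
    objective; sharing the encoder lets CWS supervision sharpen entity
    boundaries.
    """
    return lam * cross_entropy(ner_logits, ner_labels) \
        + (1.0 - lam) * cross_entropy(cws_logits, cws_labels)
```

In the full model the per-tag scores would come from the BiLSTM over BERT embeddings, and the NER branch would use a CRF likelihood rather than plain cross-entropy; the weighting principle is the same.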
Social media greatly facilitates people's daily communication and information dissemination, but it is also a breeding ground for rumors. Therefore, automatically monitoring rumor dissemination at an early stage is of great practical significance; however, existing detection methods fail to take full advantage of the semantic information in the microblog information propagation graph. To solve this problem, a rumor detection model based on Heterogeneous graph Attention Network (HAN), namely MicroBlog-HAN, was built. In the model, a hierarchical attention mechanism including node-level attention and semantic-level attention was adopted. Firstly, the neighbors of microblog nodes were aggregated by node-level attention to generate two groups of node embeddings with specific semantics. Then, the different semantics were fused by semantic-level attention to obtain the final node embeddings of the microblog, which were fed into a classifier to perform the binary classification task. Finally, the classification result of whether the input microblog is a rumor was given. Experimental results on two real-world microblog rumor datasets demonstrate that the MicroBlog-HAN model can accurately identify microblog rumors with an accuracy over 87%.
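The semantic-level attention that fuses the two groups of node embeddings can be sketched as follows. `W`, `b` and `q` stand in for learned parameters, and the scoring form is an assumption modeled on the standard HAN formulation rather than the paper's exact equations:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def semantic_attention(embeddings, W, b, q):
    """Fuse per-semantic node embeddings with learned importance weights.

    embeddings: list of (N, d) arrays, one group per semantic
    (e.g. the two groups produced by node-level attention);
    W, b, q: stand-ins for the learned projection and attention vector.
    """
    scores = []
    for Z in embeddings:
        # average importance of this semantic across all nodes
        scores.append((np.tanh(Z @ W + b) @ q).mean())
    beta = softmax(np.array(scores))  # one weight per semantic
    # weighted sum gives the final node embeddings
    return sum(bi * Z for bi, Z in zip(beta, embeddings))
```

The fused embeddings are what the downstream classifier consumes for the rumor / non-rumor decision.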
Since isotropic diffusion easily blurs edge features and coherence-enhancing diffusion produces pseudo-striations in background regions during denoising, a weighted diffusion algorithm was proposed to reduce the Rician noise of Magnetic Resonance Imaging (MRI) images according to the noise distribution. A threshold was calculated from the Rician noise variance in the background region of the MRI image and used to distinguish the image background from the edges of the Region Of Interest (ROI). A weighting function combining isotropic diffusion and coherence-enhancing diffusion based on this threshold was constructed. The constructed function could adaptively adjust the weights of the two kinds of diffusion in different structural regions, so as to exploit the advantages while overcoming the disadvantages of both. Experimental results show that the proposed algorithm outperforms several classical diffusion algorithms in Peak Signal-to-Noise Ratio (PSNR) and Mean Structural Similarity (MSSIM), and thus has better performance in noise reduction and edge preservation or enhancement.
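One way such a weighting function could blend the two diffusions is sketched below: a sigmoid of the local gradient magnitude against the noise-derived threshold weights an isotropic Laplacian term against an edge-preserving term. Note the simplifications: Perona-Malik-style diffusion stands in for the paper's coherence-enhancing diffusion (which is tensor-based), and the sigmoid slope and time step are assumed values.

```python
import numpy as np

def weighted_diffusion_step(u, threshold, dt=0.1):
    """One denoising iteration blending two diffusion behaviours (sketch).

    `threshold` plays the role of the value derived from the Rician noise
    variance in the background: below it the isotropic term dominates
    (flat background), above it the edge-preserving term dominates
    (ROI edges).
    """
    gy, gx = np.gradient(u)
    grad = np.hypot(gx, gy)
    # sigmoid weight: ~1 in flat regions (grad << threshold), ~0 at edges
    w = 1.0 / (1.0 + np.exp(np.clip((grad - threshold) / (0.1 * threshold), -50, 50)))
    # 5-point Laplacian with periodic boundaries (isotropic smoothing term)
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    c = 1.0 / (1.0 + (grad / threshold) ** 2)  # edge-stopping coefficient
    return u + dt * (w * lap + (1.0 - w) * c * lap)
```

Iterating this step smooths homogeneous background strongly while damping diffusion across strong gradients, which is the adaptive behaviour the weighting function is designed to achieve.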